Description of the diagram of how accessibility works in Linux

From a bird's-eye view, there are three conceptual sections:

1. Applications being accessed (at the top)
2. Accessibility libraries (in the middle)
3. ATs (at the bottom)

Taking each conceptual section in turn:

1. The applications-being-accessed section consists of three rows and visually reminds me of six small layer cakes. Those apps/cakes are as follows:

   a. LibreOffice, VCL, ATK Support
   b. Firefox, Gecko, ATK Support
   c. GNOME Shell, Clutter, "Cally"
   d. Epiphany, WebKitGtk, ATK Support
   e. Gedit, Gtk+, "Gail"
   f. KMail, Qt

   Note: The Epiphany layer extends past its WebKitGtk layer so that a small portion rests on top of the Gtk+ layer under Gedit. The idea is to illustrate that Epiphany mostly uses WebKitGtk, but uses a bit of Gtk+ for the application widgets.

2. The middle section is quite simple, with two layers: one for the bridges (ATK Bridge and Qt-ATSPI2) and one for AT-SPI2. The ATK Bridge rectangle spans the width of the ATK-implementing apps/toolkits. The Qt-ATSPI2 Bridge rectangle shares the same width as KMail/Qt. The AT-SPI2 rectangle spans the full width of the diagram.

3. The bottom section is also quite simple, consisting of the following single-layer rectangles:

   a. Screen Reader
   b. Screen Magnifier
   c. Speech for Users w/ LD
   d. Speech Recognition
   e. Testing Tools

There are arrows connecting each conceptual section to its neighboring section(s). Between the applications being accessed and the bridges is a set of double-headed arrows extending between each cake and the corresponding bridge. This is meant to show that information between the toolkits and the bridge flows both ways.

Similarly, there are arrows between each AT and AT-SPI2. Some are double-headed, some are not:

Screen Reader: Double-headed, because screen readers not only receive information about apps from AT-SPI2 but also sometimes use AT-SPI2 to manipulate the app/environment. For instance, Orca repositions the caret when the user navigates by heading in web content.

Screen Magnifier and Speech for Users with LD: Single-headed, pointing to the ATs, because magnifiers and non-screen-reader speech output are likely to be on the receiving end only: presenting information from the environment rather than manipulating that environment.

Speech Recognition: Single-headed, pointing from the AT to AT-SPI2, because speech recognition tools are used for input, which in turn acts upon the environment being accessed.

Testing Tools: Double-headed for the same reason the screen reader is, namely both receiving information and potentially manipulating the environment.
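
To make the arrows concrete, here is a minimal sketch of what an AT at the bottom of the diagram does, written against pyatspi, the Python bindings for AT-SPI2 used by tools such as Orca and Accerciser. It is an assumption-laden illustration, not part of the diagram: it assumes pyatspi is installed and an accessibility-enabled desktop session is running. The first half shows information flowing down to the AT (walking the accessible tree that the ATK Bridge and Qt-ATSPI2 expose); the second half shows the AT acting back on the environment, in the spirit of Orca repositioning the caret.

```python
# Sketch of an AT talking to AT-SPI2 via pyatspi (assumed installed).
import pyatspi

# Direction 1: information flowing from AT-SPI2 to the AT.
# The desktop's children are the running applications, exposed through
# their toolkit's bridge (ATK Bridge or Qt-ATSPI2).
desktop = pyatspi.Registry.getDesktop(0)
for app in desktop:
    if app is None:
        continue
    print(app.name)
    for child in app:
        if child is not None:
            print("  %s (%s)" % (child.name, child.getRoleName()))

# Direction 2: the AT acting on the environment through AT-SPI2,
# loosely analogous to Orca moving the caret during navigation.
def on_focus_changed(event):
    if not event.detail1:          # only react when focus is gained
        return
    try:
        text = event.source.queryText()
    except NotImplementedError:    # object has no text interface
        return
    text.setCaretOffset(0)         # move the caret within the app

pyatspi.Registry.registerEventListener(on_focus_changed,
                                       "object:state-changed:focused")
pyatspi.Registry.start()  # blocks, dispatching AT-SPI2 events
```

The point of the sketch is that the AT never talks to Gtk+, Qt, or WebKitGtk directly; everything it sees and does goes through AT-SPI2, which is why the AT-SPI2 rectangle spans the full width of the diagram.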